Efficient L1/Lq Norm Regularization
Authors: Jun Liu, Jieping Ye
Abstract
Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q > 1 is attractive in many regression and classification applications because it induces group sparsity in the model. The resulting optimization problem is, however, challenging to solve due to the structure of the l1/lq regularization. Existing work handles only the special cases q = 2 and q = ∞, and it cannot be easily extended to the general case. In this paper, we propose an efficient algorithm based on the accelerated gradient method for solving the l1/lq-regularized problem, which is applicable to all values of q greater than 1, thus significantly extending existing work. One key building block of the proposed algorithm is the l1/lq-regularized Euclidean projection (EP1q). Our theoretical analysis reveals the key properties of EP1q and illustrates why EP1q for general q is significantly more challenging to solve than the special cases. Based on this analysis, we develop an efficient algorithm for EP1q by solving two zero-finding problems. Experimental results demonstrate the efficiency of the proposed algorithm.
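To make the setup concrete, the sketch below shows the q = 2 special case, for which the l1/lq-regularized Euclidean projection has a well-known closed form (block soft-thresholding), plugged into a FISTA-style accelerated gradient loop for a least-squares loss. This is a minimal illustration of the overall scheme, not the paper's algorithm: for general q > 1 the projection has no closed form and is computed by the zero-finding procedure the abstract describes. The function names (group_prox_l1l2, accel_grad_l1l2), the least-squares loss, and the assumption that the groups partition the coordinates are all illustrative.

```python
import numpy as np

def group_prox_l1l2(v, groups, lam):
    """Closed-form l1/l2 projection (block soft-thresholding).

    Solves min_x 0.5*||x - v||^2 + lam * sum_g ||x_g||_2,
    the q = 2 special case of EP1q. Assumes `groups` is a list of
    index arrays that partition the coordinates of v.
    """
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:                    # shrink the whole group toward zero
            x[g] = (1.0 - lam / norm) * v[g]
    return x                              # groups with norm <= lam stay exactly 0

def accel_grad_l1l2(A, b, groups, lam, step, iters=200):
    """FISTA-style accelerated proximal gradient for
    min_x 0.5*||A x - b||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)          # gradient of the smooth loss at y
        x_new = group_prox_l1l2(y - step * grad, groups, step * lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

With step set to 1 / ||A||_2^2 (the reciprocal of the gradient's Lipschitz constant), the iterates converge at the accelerated O(1/k^2) rate; for general q one would swap group_prox_l1l2 for the EP1q routine built from the two zero-finding problems.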
Similar resources
Efficient Mixed-Norm Regularization: Algorithms and Safe Screening Methods
Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q > 1 is attractive in many applications of regression and classification in that it facilitates group sparsity in the model. The resulting optimization problem is, however, challenging to solve due t...
Efficient Online and Batch Learning Using Forward Backward Splitting
We describe, analyze, and experiment with a framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the result...
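The two-phase scheme described here (forward-backward splitting, or FOBOS) is easy to state in code. Below is a minimal single-step sketch with an l1 regularizer, for which the second phase has a closed-form solution (soft-thresholding); the step size eta and weight lam are assumed inputs, and the name fobos_step is illustrative.

```python
import numpy as np

def fobos_step(w, grad, eta, lam):
    """One forward-backward splitting step with an l1 regularizer.

    Phase 1: unconstrained gradient step on the loss.
    Phase 2: solve min_v 0.5*||v - w_half||^2 + eta*lam*||v||_1,
    whose solution is coordinate-wise soft-thresholding.
    """
    w_half = w - eta * grad                                        # forward phase
    return np.sign(w_half) * np.maximum(np.abs(w_half) - eta * lam, 0.0)
```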
Efficient Learning using Forward-Backward Splitting
We describe, analyze, and experiment with a new framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the re...
Efficient ℓq Minimization Algorithms for Compressive Sensing Based on Proximity Operator
This paper considers solving the unconstrained lq-norm (0 ≤ q < 1) regularized least squares (lq-LS) problem for recovering sparse signals in compressive sensing. We propose two highly efficient first-order algorithms via incorporating the proximity operator for nonconvex lq-norm functions into the fast iterative shrinkage/thresholding algorithm (FISTA) and the alternating direction method of multipliers...
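For intuition about this nonconvex case, the sketch below computes the scalar proximity operator of t ↦ λ|t|^q for 0 < q < 1 by bisection on the stationarity condition, then compares the nonzero candidate against t = 0. This is a generic construction under standard assumptions, not necessarily the operator used in that paper (which may exploit the known closed forms for q = 1/2 and q = 2/3); prox_lq_scalar is an illustrative name.

```python
import numpy as np

def prox_lq_scalar(x, lam, q, iters=60):
    """Proximity operator of t -> lam*|t|^q for 0 < q < 1 (illustrative).

    Minimizes f(t) = 0.5*(t - x)^2 + lam*|t|^q over t. The minimizer is
    either t = 0 or the larger root of f'(t) = t - |x| + lam*q*t^(q-1),
    found here by bisection; the two candidates are then compared.
    """
    s, x = np.sign(x), abs(x)
    # h(t) = t + lam*q*t^(q-1) attains its minimum at t_star; if |x| does
    # not exceed h(t_star), f has no stationary point and 0 is the minimizer.
    t_star = (lam * q * (1.0 - q)) ** (1.0 / (2.0 - q))
    if x <= t_star + lam * q * t_star ** (q - 1.0):
        return 0.0
    lo, hi = t_star, x                      # f' changes sign on this bracket
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid - x + lam * q * mid ** (q - 1.0) < 0.0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    f_t = 0.5 * (t - x) ** 2 + lam * t ** q # objective at the nonzero root
    return s * t if f_t < 0.5 * x ** 2 else 0.0   # f(0) = 0.5*x^2
```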
A Primal-Dual Interior Point Framework for Using the L1-Norm or the L2-Norm on the Data and Regularization Terms of Inverse Problems
Maximum A Posteriori (MAP) estimates in inverse problems are often based on quadratic formulations, corresponding to a least-squares fitting of the data and to the use of the L2 norm on the regularization term. While the implementation of this estimation is straightforward and usually based on the Gauss-Newton method, resulting estimates are sensitive to outliers, and spatial distributions of t...
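As a point of reference for the quadratic formulation this abstract starts from, the L2-L2 (Tikhonov) MAP estimate has a closed form via the normal equations, sketched below; the name tikhonov and its arguments are illustrative. Replacing either term with an L1 norm removes this closed form, which is what motivates a primal-dual interior-point treatment.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Quadratic (L2-L2) MAP estimate: min_x ||A x - b||^2 + lam*||L x||^2.

    The normal equations give the closed-form solution
    (A^T A + lam * L^T L) x = A^T b, which is what makes the quadratic
    formulation straightforward -- and sensitive to outliers in b.
    """
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```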
Journal: CoRR
Volume: abs/1009.4766
Publication year: 2010